Review for NeurIPS paper: Network Diffusions via Neural Mean-Field Dynamics

Neural Information Processing Systems

Weaknesses: i) Many design choices in Section 3.2 appear rather arbitrary. Since \epsilon is a black box, can we not get rid of the linear part from the beginning? Such an optimal-control connection can be drawn for any parameter-inference problem. From my understanding, the parameters are trained by minimizing loss function (18a) with gradient descent. However, if PMP is used, the solution quality depends heavily on the solver chosen for the PMP problem; a discussion of this is missing.


Review for NeurIPS paper: Network Diffusions via Neural Mean-Field Dynamics

Neural Information Processing Systems

The paper investigates how, in a diffusion process, the term accounting for the influence of the whole past can be modelled as a temporal convolution and approximated via a recurrent neural network. The AC thinks that this topic is fully relevant to NeurIPS, and would be of utmost interest for an (admittedly small) fraction of the audience. However, the authors must make every effort to make their work accessible (even for statistical physicists, the Mori-Zwanzig formalism is perhaps not as well known as the authors think, to say the least) and to thoroughly show how scalable the approach is compared to alternatives. The AC dearly hopes that the authors will invest in the pedagogical and writing effort required to make their work known to the ML community.

Additional emergency review: In this paper the authors predict an epidemic process on an unknown network by combining the Generalized Langevin Equation (GLE), based on the Mori-Zwanzig formalism, with deep learning.


Network Diffusions via Neural Mean-Field Dynamics

Neural Information Processing Systems

We propose a novel learning framework based on neural mean-field dynamics for inference and estimation problems of diffusion on networks. Our new framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities, which yields a delay differential equation whose memory integral is approximated by learnable time-convolution operators, resulting in a highly structured and interpretable RNN. Working directly from cascade data, our framework can jointly learn the structure of the diffusion network and the evolution of infection probabilities, which are cornerstones of important downstream applications such as influence maximization. Connections between parameter learning and optimal control are also established. Empirical study shows that our approach is versatile and robust to variations of the underlying diffusion network models, and significantly outperforms existing approaches in accuracy and efficiency on both synthetic and real-world data.
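To make the central mechanism of the abstract concrete, the sketch below shows, in miniature, how a memory integral in a delay differential equation can be replaced by a learnable recurrent state acting as a time-convolution surrogate, driving node infection probabilities forward in time. This is a hedged illustration of the general idea only: the Euler discretization, the sigmoid drift, the tanh recurrence, and all variable names, sizes, and weight initializations are assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nmf_step(x, h, A, W_h, W_x, dt=0.1):
    """One Euler step of a toy neural mean-field update.

    x   : (n,) current node infection probabilities
    h   : (n,) recurrent state standing in for the memory integral
    A   : (n, n) learnable influence (adjacency) matrix
    W_h, W_x : recurrent weights of the time-convolution surrogate
    """
    # Recurrent update: h accumulates a learned summary of the past states,
    # playing the role of the Mori-Zwanzig memory term.
    h_next = np.tanh(W_h @ h + W_x @ x)
    # Mean-field drift: susceptible mass (1 - x) becomes infected at a rate
    # driven by current neighbors (A @ x) plus the memory state.
    dx = (1.0 - x) * sigmoid(A @ x + h_next)
    # Euler step, clipped so entries remain valid probabilities.
    x_next = np.clip(x + dt * dx, 0.0, 1.0)
    return x_next, h_next

# Tiny synthetic run: 5 nodes, one seed node infected at time zero.
rng = np.random.default_rng(0)
n = 5
A = 0.5 * rng.standard_normal((n, n))
W_h = 0.1 * rng.standard_normal((n, n))
W_x = 0.1 * rng.standard_normal((n, n))

x = np.zeros(n)
x[0] = 1.0          # seed node
h = np.zeros(n)
for _ in range(20):
    x, h = nmf_step(x, h, A, W_h, W_x)
print(x)
```

In a learning setting, A, W_h, and W_x would be fit to observed cascade data by gradient descent on a trajectory loss, which is where the joint recovery of the network structure and the infection dynamics mentioned in the abstract comes from.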